Results 1 - 5 of 5
1.
Hematology ; 28(1): 2248433, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37642342

ABSTRACT

OBJECTIVE: This study aims to evaluate the consistency of the degree of heterogeneity of erythrocyte volume parameters between the automated blood analyzer Sysmex XN-9000 and the advanced red blood cell software CellaVision DI-60. METHOD: 500 blood samples from volunteers were analyzed with the Sysmex XN-9000 and the CellaVision DI-60. Sensitivity, specificity, positive predictive value, negative predictive value, false positive rate, and false negative rate were evaluated, and the consistency of all parameters was tested. RESULT: Taking the standard RBC group as the control, the RBC parameters of the macrocytic and microcytic groups were compared; the differences between the groups were statistically significant. ROC curve analysis showed that the best cutoff values of the microcytic and macrocytic fractions affecting MCV were 4.1% and 5.7%, respectively, and the best cutoff value of anisocytosis was 15.0%. The correlation coefficient between anisocytosis and red blood cell distribution width (RDW-CV) was 0.756. The sensitivity, specificity, positive predictive value, and coincidence rate of anisocytosis were high; the false negative rate was 10.0% and the false positive rate was 7.4%. CONCLUSION: All parameters of the degree of heterogeneity show good accuracy and consistency between the two instruments. Anisocytosis has the higher coincidence rate and positive predictive value, and the microcytic (MIC) and macrocytic (MAC) fractions are good predictors of an increase or decrease in MCV. Although the advanced RBC software's false negative and false positive rates are relatively high, the red blood cell image system is more intuitive and time-saving for observing cells. Consequently, combining the CellaVision DI-60 with the XN-9000 is suggested for comprehensively judging anisocytosis in daily work.
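The cutoff values in this abstract come from ROC analysis. A minimal sketch of how such a cutoff can be chosen by maximizing Youden's J statistic (the two-Gaussian score distribution and all variable names are illustrative assumptions, not the study's data):

```python
import numpy as np

def best_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])    # true positive rate
        spec = np.mean(~pred[labels == 0])   # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# synthetic scores, e.g. "% microcytic cells" for normal vs. low-MCV samples
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(3, 1, 200), rng.normal(7, 1, 200)])
labels = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
t, j = best_cutoff(scores, labels)
print(f"best cutoff = {t:.2f}, Youden J = {j:.2f}")
```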


Subjects
Erythrocyte Indices, Erythrocytes, Humans, ROC Curve, Software
2.
IEEE Trans Pattern Anal Mach Intell ; 44(10): 6153-6168, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34061741

ABSTRACT

In real-world applications, it is important for machine learning algorithms to be robust against data outliers and corruptions. In this paper, we focus on improving the robustness of a large class of learning algorithms that are formulated as low-rank semidefinite programming (SDP) problems. Traditional formulations use the square loss, which is notorious for being sensitive to outliers. We propose to replace it with more robust noise models, including the l1-loss and other nonconvex losses. However, the resultant optimization problem becomes difficult, as the objective is no longer convex or smooth. To alleviate this problem, we design an efficient algorithm based on majorization-minimization. The crux lies in constructing a good optimization surrogate, and we show that this surrogate can be obtained efficiently by the alternating direction method of multipliers (ADMM). By properly monitoring ADMM's convergence, the proposed algorithm is empirically efficient and also theoretically guaranteed to converge to a critical point. Extensive experiments are performed on four machine learning applications using both synthetic and real-world data sets. Results show that the proposed algorithm is not only fast but also outperforms state-of-the-art methods.
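As one concrete instance of the majorization-minimization idea above (a toy illustration, not the paper's SDP algorithm): for the l1-loss, each MM step majorizes |r_i| around the current residual by a quadratic, so the surrogate is a weighted least-squares problem (classical IRLS). A minimal sketch on robust linear regression:

```python
import numpy as np

def mm_l1_regression(A, b, iters=50, eps=1e-6):
    """MM for min_x ||Ax - b||_1: majorize each |r_i| at the current residual
    by r_i^2 / (2|r_i|) + |r_i|/2, yielding a weighted least-squares surrogate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]             # square-loss warm start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)     # surrogate weights
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:10] += 20.0                  # gross outliers that would wreck the square loss
x_hat = mm_l1_regression(A, b)
print(np.round(x_hat, 3))
```

Each surrogate minimization only decreases the l1 objective, which is the monotone-descent property that makes MM attractive for the nonsmooth losses discussed above.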

3.
IEEE Trans Neural Netw Learn Syst ; 30(11): 3517-3527, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31403443

ABSTRACT

Many machine learning problems involve learning a low-rank positive semidefinite matrix. However, existing solvers for this low-rank semidefinite program (SDP) are often expensive. In this paper, by factorizing the target matrix as a product of two matrices and using a Courant penalty to penalize their difference, we reformulate the SDP as a biconvex optimization problem. This allows the use of multiconvex optimization techniques to define simple surrogates, which can be minimized easily by block coordinate descent. Moreover, while traditionally this biconvex problem approaches the original problem only when the penalty parameter is infinite, we show that the two problems are equivalent when the penalty parameter is sufficiently large. Experiments on a number of SDP applications in machine learning show that the proposed algorithm is as accurate as other state-of-the-art algorithms, but is much faster, especially on large data sets.
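The reformulation above can be sketched on a toy instance. Assuming the simplest low-rank target (approximating a PSD matrix M), factorize it as U V^T and add the Courant penalty rho*||U - V||_F^2; each block update is then the exact minimizer of a convex quadratic (an illustrative sketch under these assumptions, not the paper's implementation):

```python
import numpy as np

def biconvex_lowrank(M, k, rho=10.0, iters=200):
    """min_{U,V} ||U V^T - M||_F^2 + rho * ||U - V||_F^2 by block coordinate
    descent; each update below solves the convex quadratic in one block exactly."""
    rng = np.random.default_rng(0)
    U = rng.normal(size=(M.shape[0], k))
    V = U.copy()
    I = np.eye(k)
    for _ in range(iters):
        # setting the gradient in U to zero: U (V^T V + rho I) = M V + rho V
        U = (M @ V + rho * V) @ np.linalg.inv(V.T @ V + rho * I)
        V = (M.T @ U + rho * U) @ np.linalg.inv(U.T @ U + rho * I)
    return U, V

# synthetic symmetric PSD target of exact rank 2
rng = np.random.default_rng(2)
B = rng.normal(size=(8, 2))
M = B @ B.T
U, V = biconvex_lowrank(M, k=2)
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative error: {err:.1e}")
```

With a sufficiently large rho the two factors are driven together, mirroring the abstract's point that a finite penalty parameter already makes the reformulation equivalent to the original problem.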

4.
IEEE Trans Neural Netw Learn Syst ; 26(9): 1927-38, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25343772

ABSTRACT

Nonparametric kernel learning (NPKL) is a flexible approach that learns the kernel matrix directly, without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with V^T V, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
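A sketch of the factorization trick described above on a toy target kernel; the gradient-step-per-column update here is an illustrative stand-in for the paper's exact column subproblem:

```python
import numpy as np

def npkl_lowrank_bcd(T, r, sweeps=300, lr=0.02):
    """Fit min_V ||V^T V - T||_F^2 with V of size r x n: the substitution
    K = V^T V removes the SDP's positive semidefinite constraint, and we
    cycle over the columns of V, taking a gradient step on each (toy BCD)."""
    n = T.shape[0]
    rng = np.random.default_rng(0)
    V = 0.1 * rng.normal(size=(r, n))
    for _ in range(sweeps):
        for i in range(n):
            # gradient of the objective with respect to column v_i
            g = 4.0 * V @ ((V.T @ V)[:, i] - T[:, i])
            V[:, i] -= lr * g
    return V

# target: the rank-2 "ideal" kernel of two classes (1 within, 0 across classes)
y = np.array([0, 0, 0, 1, 1, 1])
T = (y[:, None] == y[None, :]).astype(float)
V = npkl_lowrank_bcd(T, r=2)
K = V.T @ V
print(np.round(K, 2))
```

Any K produced this way is positive semidefinite by construction, which is exactly why the factorization lets the solver drop the SDP machinery.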

5.
IEEE Trans Neural Netw ; 21(11): 1831-41, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20923733

ABSTRACT

The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples when only a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in both effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm formulates SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is to first learn a small seed-kernel matrix and then propagate it into a larger full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X_l from the full sample set X; 2) learn a seed-kernel matrix on X_l by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one batch-style and one online-style. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.
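The three-stage recipe can be sketched as follows, with two loudly labeled assumptions: an "ideal" seed kernel built directly from the labels stands in for stage 2's small-scale SDP, and a Nystrom-style rule stands in for the paper's propagation step.

```python
import numpy as np

def propagate_kernel(K0, labeled, K_seed, reg=1e-6):
    """Stage 3: carry a seed kernel learnt on the labeled subset over to the
    full sample set through the base kernel K0 (Nystrom-style propagation)."""
    A = K0[:, labeled]                                   # n x l cross-kernel
    C = K0[np.ix_(labeled, labeled)]                     # l x l base block
    P = A @ np.linalg.inv(C + reg * np.eye(len(labeled)))
    return P @ K_seed @ P.T                              # n x n, PSD by construction

# toy data: two well-separated clusters, four labeled points (stage 1)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
K0 = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # RBF base kernel
labeled = [0, 1, 10, 11]
y = np.array([0, 0, 1, 1])
K_seed = (y[:, None] == y[None, :]).astype(float)        # ideal seed (stand-in for stage 2's SDP)
K = propagate_kernel(K0, labeled, K_seed)
print(K.shape)
```

After propagation, within-cluster entries of K are large and cross-cluster entries are near zero, i.e., the label structure of the small seed kernel has been spread to the unlabeled samples.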


Subjects
Algorithms, Artificial Intelligence, Neural Networks (Computer), Mathematical Computing, Normal Distribution, Programming Languages, Software Design